6 research outputs found

    Exploration of latent space of LOD2 GML dataset to identify similar buildings

    Explainable numerical representations of otherwise complex datasets are vital, as they extract relevant information in a form that is more convenient to analyze and study. Such latent representations help identify clusters and outliers and assess the similarity between data points. 3-D building models are one such dataset with inherent complexity, given the variety in footprint shape, roof type, walls, height, and volume. Traditionally, comparing building shapes requires matching their known properties and shape metrics against each other; however, this demands obtaining a plethora of such properties to calculate similarity. In contrast, this study uses an autoencoder-based method to encode the shape information in a fixed-size vector that can be compared and grouped with the help of distance metrics. The study uses "FoldingNet," a 3D autoencoder, to generate the latent representation of each building from the obtained LOD2 GML dataset of German cities and villages. The cosine distance is calculated between the latent vectors to determine the locations of similar buildings in the city. Further, a set of geospatial tools is applied iteratively to find geographical clusters of buildings with similar forms. The state of Brandenburg in Germany is taken as an example to test the methodology. The study introduces a novel approach to finding similar buildings and their geographical locations, which can help define a neighborhood's character, history, and social setting. The process can also be scaled to include multiple settlements, where more regional insights can be made.
    Comment: 10 pages, 6 figures
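    The latent-vector comparison described above can be sketched as follows. This is a minimal illustration, not the study's implementation: the 512-dimensional random vectors stand in for actual FoldingNet embeddings, and the building names are invented.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity between two latent vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 512-d latent codes for three buildings (random stand-ins
# for real autoencoder embeddings).
rng = np.random.default_rng(0)
codes = {name: rng.normal(size=512) for name in ["A", "B", "C"]}
codes["B"] = codes["A"] + 0.05 * rng.normal(size=512)  # B is a near-copy of A

# Rank the other buildings by similarity to building A.
dists = {n: cosine_distance(codes["A"], v) for n, v in codes.items() if n != "A"}
most_similar = min(dists, key=dists.get)  # smallest distance = most similar
```

    Because cosine distance ignores vector magnitude, buildings with similar shape signatures cluster together regardless of the overall scale of their latent codes.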

    Quantifying Urban Surroundings Using Deep Learning Techniques: A New Proposal

    No full text
    Assessments of human perception of urban spaces are essential for the management and upkeep of our surroundings. Much of the previous work is dedicated to the visual appreciation and judgement of the various physical features present in the surroundings. Visual qualities of the environment stimulate feelings of safety, pleasure, and belongingness. Scaling such assessments to cover whole cities necessitates state-of-the-art computer vision techniques. We developed a mobile application to collect visual datasets in the form of street-level imagery with the help of volunteers. We then used deep-learning-based image analysis techniques to gain insights into these datasets. In addition, we explained our findings with the help of environmental variables related to individual satisfaction and well-being.

    Classification and mapping of sound sources in local urban streets through AudioSet data and Bayesian optimized Neural Networks

    No full text
    Deep learning (DL) methods have provided several breakthroughs over conventional data analysis techniques, especially with image and audio datasets. Such models have made rapid assessment and large-scale quantification of environmental attributes possible. This study focuses on building Artificial Neural Network (ANN) and Recurrent Neural Network (RNN) models to classify sound sources from manually collected sound clips on local streets. A subset of the openly available AudioSet dataset is used to train and evaluate the models against the sound classes common to urban streets. Audio data are collected at random locations in the selected study area of 0.2 sq. km. The clips are further classified according to the extent of anthropogenic (mainly traffic), natural, and human-based sounds present at particular locations. Rather than tuning model hyperparameters manually, the study uses Bayesian optimization to obtain the hyperparameter values of the neural network models. The optimized models produce overall accuracies of 89 percent and 60 percent on the evaluation set for the three-class and fifteen-class models, respectively. The model detections are mapped over the study area using the Inverse Distance Weighted (IDW) spatial interpolation method.
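    The IDW mapping step could look roughly like the sketch below. The survey points and sound scores are invented for illustration; the abstract does not specify the power parameter, so the conventional default of 2 is assumed.

```python
import numpy as np

def idw(sample_xy, sample_vals, query_xy, power=2, eps=1e-12):
    """Inverse Distance Weighted interpolation: each query point's value is a
    weighted mean of the sample values, with weights 1 / distance**power."""
    # Pairwise distances: (n_query, n_sample)
    d = np.linalg.norm(sample_xy[None, :, :] - query_xy[:, None, :], axis=2)
    w = 1.0 / (d ** power + eps)  # eps avoids division by zero at sample points
    return (w * sample_vals[None, :]).sum(axis=1) / w.sum(axis=1)

# Hypothetical traffic-sound scores at four survey locations (unit square).
pts  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.9, 0.2, 0.4, 0.7])

# Interpolate at a sampled location and at the centre of the square.
queries = np.array([[0.0, 0.0], [0.5, 0.5]])
interpolated = idw(pts, vals, queries)
```

    A query that coincides with a survey point recovers that point's value, while the equidistant centre point receives the plain average of the four scores.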

    Identifying Streetscape Features Using VHR Imagery and Deep Learning Applications

    No full text
    Deep Learning (DL) based identification and detection of elements in urban spaces from Earth Observation (EO) datasets have been widely researched and discussed. Such studies have developed state-of-the-art methods to map urban features like building footprints or roads in detail. This study goes further by combining multiple such approaches to identify the fine-grained urban features that define streetscapes. Specifically, the research employs object detection and semantic segmentation models, along with other computer vision methods, to identify ten streetscape features: movement corridors, roadways, sidewalks, bike paths, on-street parking, vehicles, trees, vegetation, road markings, and buildings. The training data for identifying and classifying all the elements except road markings are collected from open sources, and the models are fine-tuned to fit the study's context. A training dataset is created manually and used to delineate road markings. Apart from the model-specific evaluation on the test set of each dataset, the study creates its own test dataset from the study area to analyze the models' performance. The outputs from these models are then integrated into a geospatial dataset, which is additionally used to generate 3D views and street cross-sections for the city. The trained models and data sources are discussed in the research and made available for urban researchers to use.
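    Integrating outputs from separate detection and segmentation models into one layer amounts, at the raster level, to merging per-class masks under a priority order. A minimal sketch of that idea follows; the three feature classes, the 4x4 masks, and the priority order are assumptions for illustration, not the study's actual integration pipeline.

```python
import numpy as np

# Hypothetical binary masks from separate models on a tiny 4x4 raster.
roadway      = np.zeros((4, 4), dtype=int); roadway[:2, :]     = 1
sidewalk     = np.zeros((4, 4), dtype=int); sidewalk[2, :]     = 1
road_marking = np.zeros((4, 4), dtype=int); road_marking[0, 0] = 1
masks = {"roadway": roadway, "sidewalk": sidewalk, "road_marking": road_marking}

# Later entries overwrite earlier ones, so markings sit on top of roadways.
priority = ["roadway", "sidewalk", "road_marking"]
labels = {name: i + 1 for i, name in enumerate(priority)}  # 0 = background

merged = np.zeros((4, 4), dtype=int)
for name in priority:
    merged[masks[name] > 0] = labels[name]
```

    In a real pipeline the merged raster would then be vectorized into polygons with georeferencing attached, which is where tools like GDAL typically come in.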